add dice evaluation metric #225
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master     #225      +/-   ##
==========================================
+ Coverage   84.35%   84.58%   +0.22%
==========================================
  Files          90       90
  Lines        4340     4385      +45
  Branches      687      701      +14
==========================================
+ Hits         3661     3709      +48
+ Misses        537      536       -1
+ Partials      142      140       -2
mmseg/datasets/custom.py
Outdated
class_table_data.append(
    [class_names[i]] +
    [round(m[i] * 100, 2) for m in ret_metrics[2:]] +
I suggest using np.round once for all.
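For illustration, a minimal sketch of what rounding "once for all" could look like, assuming ret_metrics is a list of per-class NumPy arrays as in the diff above; the class names and values below are made up. Since np.round broadcasts over a whole array, one call per metric replaces the per-element round() inside the table-building loop.

```python
import numpy as np

# Hedged sketch with hypothetical data; names mirror the diff above.
class_names = ['road', 'car']
ret_metrics = [np.array(0.84),          # aAcc (overall accuracy, scalar)
               np.array([0.91, 0.77]),  # per-class Acc
               np.array([0.85, 0.70]),  # per-class IoU
               np.array([0.90, 0.80])]  # per-class Dice

# np.round broadcasts, so each metric array is rounded in a single call.
ret_metrics_round = [np.round(m * 100, 2) for m in ret_metrics]

class_table_data = [['Class', 'IoU', 'Dice', 'Acc']]
for i in range(len(class_names)):
    class_table_data.append(
        [class_names[i]] +
        [m[i] for m in ret_metrics_round[2:]] +
        [ret_metrics_round[1][i]])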
mmseg/datasets/custom.py
Outdated
class_table_data.append(
    [class_names[i]] +
    [round(m[i] * 100, 2) for m in ret_metrics[2:]] +
    [round(ret_metrics[1][i] * 100, 2)])
Why is this order? Will [1:] work?
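A hypothetical reading of the question, reusing the dummy setup from the sketch above: if the Acc column were moved so it directly follows the class name, with the table header reordered to match, the two separate terms would collapse into a single slice over ret_metrics[1:].

```python
# Hypothetical variant: header reordered to ['Class', 'Acc', 'IoU', 'Dice'],
# so one slice over ret_metrics[1:] covers Acc, IoU and Dice in that order.
class_table_data = [['Class', 'Acc', 'IoU', 'Dice']]
for i in range(len(class_names)):
    class_table_data.append(
        [class_names[i]] +
        [round(float(m[i]) * 100, 2) for m in ret_metrics[1:]])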
mmseg/datasets/custom.py
Outdated
@@ -315,57 +315,58 @@ def evaluate(self, results, metric='mIoU', logger=None, **kwargs):

         Args:
             results (list): Testing results of the dataset.
-            metric (str | list[str]): Metrics to be evaluated.
+            metric (str | list[str]): Metrics to be evaluated. 'mIoU' and
+                'mDice' are support ONLY.
Suggested change:
-                'mDice' are support ONLY.
+                'mDice' are supported.
mmseg/datasets/custom.py
Outdated
        [np.round(m[i] * 100, 2) for m in ret_metrics[2:]] +
        [np.round(ret_metrics[1][i] * 100, 2)])
summary_table_data = [['Scope'] +
                      ['m' + head
                       for head in class_table_data[0][1:]] + ['aAcc']]
summary_table_data.append(
    ['global'] +
    [np.round(np.nanmean(m) * 100, 2) for m in ret_metrics[2:]] +
    [np.round(np.nanmean(ret_metrics[1]) * 100, 2)] +
    [np.round(np.nanmean(ret_metrics[0]) * 100, 2)])
We may use np.round once for all.
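In the same hedged spirit, a sketch of what that could look like for the summary row, reusing the dummy ret_metrics from the first sketch: gather the nanmeans into a single array first, then round once.

```python
# Collect the column means first, then round the whole array in one call.
# Order follows the summary header above: metric means, mAcc, then aAcc.
mean_values = np.array(
    [np.nanmean(m) for m in ret_metrics[2:]] +
    [np.nanmean(ret_metrics[1]), np.nanmean(ret_metrics[0])])

summary_table_data = [['Scope', 'mIoU', 'mDice', 'mAcc', 'aAcc']]
summary_table_data.append(['global'] +
                          list(np.round(mean_values * 100, 2)))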
* add dice evaluation metric
* add dice evaluation metric
* add dice evaluation metric
* support 2 metrics
* support 2 metrics
* support 2 metrics
* support 2 metrics
* fix docstring
* use np.round once for all